Effective altruism


Oxford shuts down institute run by Elon Musk-backed philosopher

The Guardian

Oxford University this week shut down an academic institute run by one of Elon Musk's favorite philosophers. The Future of Humanity Institute, dedicated to the long-termism movement and other Silicon Valley-endorsed ideas such as effective altruism, closed this week after 19 years of operation. Musk had donated $1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence. He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter. The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers. OpenAI chief executive Sam Altman, Microsoft co-founder Bill Gates and Tesla chief Musk all wrote blurbs for his 2014 bestselling book Superintelligence.


Elie Hassenfeld Q&A: '$5,000 to Save a Life Is a Bargain'

WIRED

When the board of OpenAI staged a botched mutiny last November, throwing out the company's leadership only to have the bosses return while board members were pressured to resign, something seemed rotten in the state of effective altruism. Nominally, OpenAI's mission had been to ensure that AI "benefits all of humanity." Fiduciarily, OpenAI's mission is to benefit the subset of humanity with a stake in OpenAI. And then, of course, there was Sam Bankman-Fried, the felonious altruist who argued in court last fall that his sordid crypto exchange was in fact a noble exercise in earning-to-give--making Midas money, sure, but only to funnel it to the global poor. This week he's facing a prison sentence of up to 50 years, which his legal team has complained paints him as a "depraved super-villain."


The Shocking Drama at OpenAI Isn't As Stupid As It Looks

Slate

The confounding saga of Sam Altman's sudden, shocking expulsion from OpenAI on Friday, followed by last-ditch attempts from investors and loyalists to reinstate him over the weekend, appears to have ended right where it started: with Altman and former OpenAI co-founder/president/board member Greg Brockman out for good. But there's a twist: Microsoft, which has been OpenAI's cash-and-infrastructure backer for years, announced early Monday morning that it was hiring Altman and Brockman "to lead a new advanced AI research team." In a follow-up tweet, Microsoft CEO Satya Nadella declared that Altman would become chief executive of this team, which would take the shape of an "independent" entity within Microsoft, operating something like company subsidiaries GitHub and LinkedIn. Notably, per Brockman, this new entity will be led by himself, Altman, and the first three employees who'd quit OpenAI Friday night in protest of how those two had been treated. "I'm super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation," Nadella wrote.


Effective Altruism Is Pushing a Dangerous Brand of 'AI Safety'

WIRED

Throughout my two decades in Silicon Valley, I have seen effective altruism (EA)--a movement consisting of an overwhelmingly white male group based largely out of Oxford University and Silicon Valley--gain alarming levels of influence. EA is currently being scrutinized due to its association with Sam Bankman-Fried's crypto scandal, but less has been written about how the ideology is now driving the research agenda in the field of artificial intelligence (AI), creating a race to proliferate harmful systems, ironically in the name of "AI safety." EA is defined by the Center for Effective Altruism as "an intellectual project, using evidence and reason to figure out how to benefit others as much as possible." And "evidence and reason" have led many EAs to conclude that the most pressing problem in the world is preventing an apocalypse in which an artificial general intelligence (AGI) created by humans exterminates us. To prevent this apocalypse, EA's career advice center, 80,000 Hours, lists "AI safety technical research" and "shaping future governance of AI" as the top two recommended careers for EAs, and the billionaire EA class funds initiatives attempting to stop an AGI apocalypse.


Stop the killer robots! Musk-backed lobbyists fight to save Europe from bad AI

POLITICO

A lobby group backed by Elon Musk and associated with a controversial ideology popular among tech billionaires is fighting to prevent killer robots from terminating humanity, and it's taken hold of Europe's Artificial Intelligence Act to do so. The Future of Life Institute (FLI) has over the past year made itself a force of influence on some of the AI Act's most contentious elements. Despite the group's links to Silicon Valley, Big Tech giants like Google and Microsoft have found themselves on the losing side of FLI's arguments. In the EU bubble, the arrival of a group whose actions are colored by fear of AI-triggered catastrophe rather than run-of-the-mill consumer protection concerns was received like a spaceship alighting in the Schuman roundabout. Some worry that the institute embodies a techbro-ish anxiety about low-probability threats that could divert attention from more immediate problems.


Power-hungry robots, space colonization, cyborgs: inside the bizarre world of 'longtermism'

The Guardian

Most of us don't think of power-hungry killer robots as an imminent threat to humanity, especially when poverty and the climate crisis are already ravaging the Earth. This wasn't the case for Sam Bankman-Fried and his followers, powerful actors who have embraced a school of thought within the effective altruism movement called "longtermism". In February, the Future Fund, a philanthropic organization endowed by the now-disgraced cryptocurrency entrepreneur, announced that it would disburse more than $100m – and possibly up to $1bn – this year on projects to "improve humanity's long-term prospects". That slightly cryptic phrasing might have puzzled those who think of philanthropy as funding homelessness charities and medical NGOs in the developing world. In fact, the Future Fund's particular areas of interest include artificial intelligence, biological weapons and "space governance", a mysterious term referring to settling humans in space as a potential "watershed moment in human history".


Why the collapse of Sam Bankman-Fried's FTX has split A.I. researchers

#artificialintelligence

First, we need to clear up terminology, like A.I. Safety, which sounds like a completely neutral, uncontroversial term. Who wouldn't want safe A.I. software? And you might think that the definition of A.I. "safety" would include A.I. that isn't racist or sexist and isn't used to abet genocide. All of which, by the way, are actual, documented concerns about today's existing A.I. software. Yet none of those concerns are what A.I. researchers generally mean when they talk about "A.I. Safety." Instead, those things fall into the camp of "A.I. Ethics."


Inside effective altruism, where the far future counts a lot more than the present

MIT Technology Review

Even during an actual pandemic, Flynn's focus struck many Oregonians as far-fetched and foreign. Perhaps unsurprisingly, he ended up losing the 2022 primary to the more politically experienced Democrat, Andrea Salinas. But despite Flynn's lackluster showing, he made history as effective altruism's first political candidate. Since its birth in the late 2000s, effective altruism has aimed to answer the question "How can those with means have the most impact on the world in a quantifiable way?"--and supplied clear methodologies for calculating the answer. Directing money to organizations that use evidence-based approaches is the technique EA is best known for.


William MacAskill: 'There are 80 trillion people yet to come. They need us to start protecting them'

The Guardian

Although most cultures, particularly in the west, provide a great many commemorations of distant ancestors – statues, portraits, buildings – we are much less willing to consider our far-off descendants. We might invoke grandchildren, at a push great-grandchildren, but after that, it all becomes a bit vague and, well, unimaginable. And while we look with awe and fascination at the Egyptian pyramids, built 5,000 years ago, we seem incapable of thinking, or even contemplating, 5,000 years into the future. That lies in the realm of science fiction, which is tantamount to fantasy. But the chances are that, barring a global catastrophe, humanity will still be very much around in 5,000 years, and, going by the average lifespan of mammal species, should still be thriving in 500,000 years. If we play our cards right, we could even be here in 5m or 500m years, which means there may be thousands or even millions of times more human beings to come than have already existed.